A Review of Smart Materials in Tactile Actuators for Information Delivery
As the largest organ in the human body, the skin provides an important
sensory channel through which humans receive external stimulation based on
touch. Through the information perceived by touch, people can sense
properties of objects such as weight, temperature, texture, and motion. In
fact, these properties are nerve stimuli delivered to our brain by different
kinds of receptors in the skin. Mechanical, electrical, and thermal stimuli
can excite these receptors and cause different information to be conveyed
through the nerves. Actuator technologies that provide mechanical,
electrical, or thermal stimuli have been developed; these include static or
vibrational actuation, electrostatic stimulation, focused ultrasound, and more.
Smart materials, such as piezoelectric materials, carbon nanotubes, and shape
memory alloys, play important roles in providing actuation for tactile
sensation. This paper reviews the biological background of human tactile
sensing, to give an understanding of how we sense and interact with the world
through touch, as well as conventional and state-of-the-art tactile actuator
technologies for tactile feedback delivery.
Robust Deep Multi-Modal Sensor Fusion using Fusion Weight Regularization and Target Learning
Sensor fusion has wide applications in many domains, including health care and
autonomous systems. While the advent of deep learning has enabled promising
multi-modal fusion of high-level features and end-to-end sensor fusion
solutions, existing deep-learning-based sensor fusion techniques, including
deep gating architectures, are not always resilient, leading to the issue of
fusion weight inconsistency. We propose deep multi-modal sensor fusion
architectures with enhanced robustness, particularly in the presence of sensor
failures. At the core of our gating architectures are fusion weight
regularization and fusion target learning, operating on auxiliary unimodal
sensing networks appended to the main fusion model. The proposed regularized
gating architectures outperform existing deep learning architectures, with and
without gating, under both clean and corrupted sensory inputs resulting from
sensor failures. The demonstrated improvements are particularly pronounced when
one or more sensory modalities are corrupted.
Comment: 8 page
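The gated fusion and fusion weight regularization described in this abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the softmax gate, the function names, and the regularization targets (taken here to come from auxiliary unimodal networks that flag a failed sensor) are all hypothetical stand-ins.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a vector of gate logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_fusion(features, gate_logits):
    # Blend per-modality feature vectors with softmax fusion weights;
    # the logits stand in for the output of a learned gating network.
    w = softmax(gate_logits)
    fused = sum(wi * fi for wi, fi in zip(w, features))
    return fused, w

def fusion_weight_penalty(weights, target_weights, lam=1.0):
    # Hypothetical regularization term pulling the fusion weights toward
    # targets derived from auxiliary unimodal networks (e.g. near-zero
    # target weight for a modality whose unimodal branch signals failure).
    return lam * float(np.sum((weights - target_weights) ** 2))

# Toy example: the first modality is informative, the second has failed.
f_cam = np.array([0.9, 0.1, 0.4])
f_bad = np.zeros(3)  # corrupted sensor contributes nothing useful
fused, w = gated_fusion([f_cam, f_bad], gate_logits=np.array([2.0, -2.0]))
penalty = fusion_weight_penalty(w, target_weights=np.array([1.0, 0.0]))
```

Added to a task loss during training, such a penalty would discourage the inconsistent fusion weights the abstract mentions, since a corrupted modality is explicitly pushed toward a small weight.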
CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion
Depth completion deals with the problem of converting a sparse depth map into a
dense one, given the corresponding color image. The convolutional spatial
propagation network (CSPN) is one of the state-of-the-art (SoTA) methods for
depth completion, recovering structural details of the scene. In this paper, we
propose CSPN++, which further improves its effectiveness and efficiency by
learning adaptive convolutional kernel sizes and numbers of iterations for the
propagation, so that the context and computational resources needed at each
pixel can be dynamically assigned on demand. Specifically, we formulate the
learning of the two hyper-parameters as an architecture selection problem in
which various configurations of kernel sizes and numbers of iterations are
first defined, and a set of soft weighting parameters is then trained to either
assemble or select from the pre-defined configurations at each pixel. In our
experiments, we find that weighted assembling, which we refer to as
"context-aware CSPN", can lead to significant accuracy improvements, while
weighted selection, "resource-aware CSPN", can reduce computational cost
significantly with similar or better accuracy. Moreover, the resources needed
by CSPN++ can be adjusted automatically with respect to the computational
budget. Finally, to avoid the side effects of noisy or inaccurate sparse
depths, we embed a gated network inside CSPN++, which further improves
performance. We demonstrate the effectiveness of CSPN++ on the KITTI depth
completion benchmark, where it significantly improves over CSPN and other SoTA
methods.
Comment: Camera Ready Version. Accepted by AAAI 202
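The per-pixel weighted assembling of pre-defined configurations can be sketched in a simplified 1-D form. The real CSPN++ operates on 2-D depth maps with learned affinity kernels of several sizes; in this toy version, assumed for illustration only, the affinity is a single scalar per pixel and only the number of propagation iterations varies across configurations.

```python
import numpy as np

def propagate(depth, affinity, iterations):
    # CSPN-style propagation (1-D toy): each pixel is repeatedly replaced
    # by an affinity-weighted average of itself and its two neighbours.
    d = depth.astype(float).copy()
    for _ in range(iterations):
        left, right = np.roll(d, 1), np.roll(d, -1)
        d = affinity * d + 0.5 * (1.0 - affinity) * (left + right)
    return d

def assemble(depth, affinity, iter_configs, logits):
    # "Context-aware" assembling: run every pre-defined configuration and
    # blend the results with per-pixel softmax weights. `logits` has shape
    # [num_configs, num_pixels]; a hard argmax over the same weights would
    # correspond to the "resource-aware" selection variant, which skips
    # the computation of unselected configurations.
    w = np.exp(logits - logits.max(axis=0))
    w /= w.sum(axis=0)
    outs = np.stack([propagate(depth, affinity, it) for it in iter_configs])
    return (w * outs).sum(axis=0)
```

When the soft weights at a pixel collapse onto one configuration, `assemble` reduces to plain propagation with that iteration count, which is what lets the selection variant trade accuracy against compute.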